Results 1 - 10 of 10
1.
Concurrency and Computation: Practice and Experience ; 2023.
Article in English | Scopus | ID: covidwho-2323991

ABSTRACT

In this article, the detection of COVID-19 patients based on an attention segmental recurrent neural network (ASRNN) with the Archimedes optimization algorithm (AOA) using ultra-low-dose CT (ULDCT) images is proposed. Here, the ultra-low-dose CT images are gathered from a real-time dataset. The input images are preprocessed with a convolutional auto-encoder, which improves ULDCT image quality by removing noise. The preprocessed images are given to generalized additive models with structured interactions (GAMI) for extracting radiomic features. Radiomic features, such as morphologic, gray-scale statistic, and Haralick texture features, are extracted using GAMI-Net. The ASRNN classifier, whose weight parameters are optimized with the Archimedes optimization algorithm, classifies COVID-19 ULDCT images as COVID-19 or normal. The proposed approach is implemented on the MATLAB platform. The proposed ASRNN-AOA-ULDCT attains accuracy 22.08%, 24.03%, 34.76%, 34.65%, 26.89%, 45.86%, and 32.14% and precision 23.34%, 26.45%, 34.98%, 27.06%, 35.87%, 34.44%, and 22.36% better than the existing methods, such as DenseNet-HHO-ULDCT, ELM-DNN-ULDCT, EDL-ULDCT, ResNet 50-ULDCT, SDL-ULDCT, CNN-ULDCT, and DRNN-ULDCT, respectively. © 2023 John Wiley & Sons, Ltd.
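The entry above lists Haralick texture statistics among the radiomic features extracted by GAMI-Net. As a minimal illustration of what such statistics are, the sketch below builds a gray-level co-occurrence matrix (GLCM) for the horizontal neighbor offset and derives two classical Haralick measures, contrast and energy; the function name, the 4-level quantization, and the toy image are illustrative assumptions, not the paper's code.

```python
import numpy as np

def glcm_features(img, levels=4):
    """Symmetric gray-level co-occurrence matrix for the horizontal
    neighbor offset, plus two Haralick statistics derived from it."""
    glcm = np.zeros((levels, levels), dtype=float)
    for r in range(img.shape[0]):
        for c in range(img.shape[1] - 1):
            a, b = img[r, c], img[r, c + 1]
            glcm[a, b] += 1
            glcm[b, a] += 1              # symmetric counting
    glcm /= glcm.sum()                   # normalize to joint probabilities
    i, j = np.indices(glcm.shape)
    contrast = np.sum((i - j) ** 2 * glcm)   # local intensity variation
    energy = np.sum(glcm ** 2)               # texture uniformity
    return contrast, energy

# A perfectly uniform patch has zero contrast and maximal energy.
flat = np.zeros((4, 4), dtype=int)
contrast, energy = glcm_features(flat)
print(contrast, energy)   # 0.0 1.0
```

Real radiomics pipelines compute many more GLCM statistics over several offsets and angles; this shows only the co-occurrence counting step they share.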

2.
International Journal of Pattern Recognition & Artificial Intelligence ; : 1, 2023.
Article in English | Academic Search Complete | ID: covidwho-2319097

ABSTRACT

COVID-19 has recently become known as a severe respiratory (lung) syndrome and has gradually produced pneumonia, a lung disorder, all around the world. As the coronavirus continues to spread rapidly across the globe, the computed tomography (CT) technique has become important and essential for quick diagnosis of this dangerous syndrome. Hence, a precise computer-based technique is needed to assist medical clinicians in identifying COVID-19-affected patients from CT scan images. Therefore, a multilayer perceptron neural network optimized with Garra Rufa Fish optimization using CT scan images is proposed in this paper for the classification of COVID-19 patients (COV-19-MPNN-GRF-CTI). The input images are taken from the SARS-COV-2 CT-scan dataset. Initially, the input images are pre-processed using a convolutional auto-encoder (CAE) to enhance their quality by eliminating noise. The pre-processed images are fed to a Residual Network (ResNet-50) for extracting global and statistical features. The features of the CT scan images extracted by ResNet-50 are subsequently input to the multilayer perceptron neural network (MPNN), which classifies the CT images as COVID-19 or Non-COVID-19 patients. Here, the Batch Normalization layer of the MPNN is separated and added to the ResNet-50 layer. Generally, the MPNN classifier does not employ any optimization approach for computing the optimal parameters and accurately classifying the extracted CT-image features; here, the Garra Rufa Fish (GRF) optimization algorithm optimizes the weight parameters of the MPNN classifier. The proposed approach is executed in MATLAB. Performance metrics such as sensitivity, precision, specificity, F-measure, accuracy, and error rate are examined.
The proposed COV-19-MPNN-GRF-CTI method provides 22.08%, 24.03%, and 34.76% higher accuracy, 23.34%, 26.45%, and 34.44% higher precision, and 33.98%, 21.95%, and 34.78% lower error rate compared with existing methods, such as multi-task deep learning using CT image analysis for COVID-19 pneumonia classification and segmentation (COV-19-MDP-CTI), COVID-19 classification utilizing CT scan depending on meta-classifier approach (COV-19-SEMC-CTI), and deep learning-based COVID-19 prediction utilizing CT scan images (COV-19-CNN-CTI), respectively. [FROM AUTHOR] Copyright of International Journal of Pattern Recognition & Artificial Intelligence is the property of World Scientific Publishing Company and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express written permission. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all abstracts.)

3.
New Gener Comput ; 40(4): 1053-1075, 2022.
Article in English | MEDLINE | ID: covidwho-2148761

ABSTRACT

The new type of coronavirus disease, called COVID-19, which has spread from Wuhan, China since the beginning of 2020, has caused many deaths and cases in most countries and has reached a global pandemic scale. In addition to test kits, X-ray imaging of lung patients has frequently been used in the detection of COVID-19 cases. In the proposed method, a novel approach based on a deep learning model named DeepCovNet was utilized to classify chest X-ray images containing COVID-19, normal (healthy), and pneumonia classes. The convolutional-autoencoder model, which had convolutional layers in its encoder and decoder blocks, was trained from scratch on the processed chest X-ray images for deep feature extraction. Distinctive features were selected from the deep feature set with a novel and robust algorithm named SDAR. In the classification stage, an SVM classifier with various kernel functions was used to evaluate the classification performance of the proposed method. Hyperparameters of the SVM classifier were also optimized with a Bayesian algorithm to increase classification accuracy. Specificity, sensitivity, precision, and F-score were used as performance metrics in addition to accuracy, which was the main criterion. The proposed method, with an accuracy of 99.75%, outperformed the other deep learning-based approaches.
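Several entries in this list report the same family of metrics (specificity, sensitivity, precision, F-score, accuracy). As a reference, the sketch below derives all of them from raw confusion-matrix counts; the function name and the example counts are hypothetical, not taken from any of the cited studies.

```python
def binary_metrics(tp, fp, tn, fn):
    """Standard binary-classification metrics from confusion-matrix
    counts (positive class = COVID-19)."""
    sensitivity = tp / (tp + fn)                 # true-positive rate (recall)
    specificity = tn / (tn + fp)                 # true-negative rate
    precision = tp / (tp + fp)
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    f_score = 2 * precision * sensitivity / (precision + sensitivity)
    return {"sensitivity": sensitivity, "specificity": specificity,
            "precision": precision, "accuracy": accuracy, "f_score": f_score}

# Hypothetical counts for a 200-image test split (not any paper's data).
m = binary_metrics(tp=90, fp=5, tn=95, fn=10)
print(m["accuracy"])   # 0.925
```

The error rate cited by some entries is simply 1 minus accuracy under this formulation.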

4.
Sensors (Basel) ; 22(20)2022 Oct 17.
Article in English | MEDLINE | ID: covidwho-2071712

ABSTRACT

Research on face recognition with masked faces has become increasingly important due to the prolonged COVID-19 pandemic. To make face recognition practical and robust, a large amount of face image data should be acquired for training purposes. However, it is difficult to obtain masked face images for each human subject. To cope with this difficulty, this paper proposes a simple yet practical method to synthesize a realistic masked face for an unseen face image. For this, a cascade of two convolutional auto-encoders (CAEs) has been designed. The first CAE generates a pose-alike face wearing a mask pattern, which is expected to fit the input face in terms of pose view. The output of the first CAE is then fed into the second CAE, which extracts a segmentation map that localizes the mask region on the face. Using the segmentation map, the mask pattern can be fused with the input face by means of simple image-processing techniques. The proposed method relies on face appearance reconstruction without any facial landmark detection or localization techniques. Extensive experiments with the GTAV Face database and the Labeled Faces in the Wild (LFW) database show that the two complementary generators can rapidly and accurately produce synthetic faces even for challenging input faces (e.g., a low-resolution face of 25 × 25 pixels with out-of-plane rotations).
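The final fusion step this entry describes (pasting the mask pattern onto the face wherever the segmentation map is active) amounts to per-pixel compositing. A minimal numpy sketch of that step is below; the toy arrays stand in for the face image, the mask pattern, and the second CAE's binary segmentation map, and are not data from the paper.

```python
import numpy as np

# Toy 4x4 grayscale "face" and mask pattern; seg plays the role of the
# binary segmentation map (1 = mask region) produced by the second CAE.
face = np.full((4, 4), 0.8)
pattern = np.full((4, 4), 0.2)
seg = np.zeros((4, 4))
seg[2:, :] = 1.0                     # lower half of the face is masked

# Fusion: keep the face where seg == 0, paste the pattern where seg == 1.
fused = seg * pattern + (1.0 - seg) * face
print(fused[0, 0], fused[3, 0])      # 0.8 0.2
```

With a soft (non-binary) segmentation map the same expression blends the two images smoothly at the mask boundary.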


Subject(s)
COVID-19 , Facial Recognition , Humans , Pandemics , Image Processing, Computer-Assisted/methods , Databases, Factual
5.
3rd International Workshop of Advances in Simplifying Medical Ultrasound, ASMUS 2022, held in Conjunction with 25th International Conference on Medical Image Computing and Computer Assisted Intervention, MICCAI 2022 ; 13565 LNCS:23-33, 2022.
Article in English | Scopus | ID: covidwho-2059734

ABSTRACT

The need to summarize long medical scan videos for automatic triage in Emergency Departments, and to transmit the summarized videos for telemedicine, has gained significance during the COVID-19 pandemic. However, supervised learning schemes for summarizing videos are infeasible, as manual labeling of scans for large datasets is impractical for frontline clinicians. This work presents a methodology to summarize ultrasound videos using completely unsupervised learning schemes and is validated on lung ultrasound videos. A convolutional autoencoder and a Transformer decoder are trained in an unsupervised reinforcement learning setup, i.e., without supervised labels anywhere in the workflow. A novel precision and recall computation for ultrasound videos is also presented; employing it, high precision and F1 scores of 64.36% and 35.87%, with an average video compression rate of 78%, are obtained when validated against clinically annotated cases. Although demonstrated on lung ultrasound videos, our approach can be readily extended to other imaging modalities. © 2022, The Author(s), under exclusive license to Springer Nature Switzerland AG.
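The entry above reports precision, recall/F1, and a compression rate for video summaries. The paper defines its own ultrasound-specific computation; as a generic frame-level analogue only, the sketch below scores a set of selected frames against clinician-annotated frames, with the compression rate taken as the fraction of frames dropped. Function name and frame indices are illustrative assumptions.

```python
def summary_scores(selected, annotated, n_frames):
    """Frame-level precision/recall/F1 of a video summary against an
    annotated reference, plus the fraction of frames dropped."""
    sel, ann = set(selected), set(annotated)
    tp = len(sel & ann)                      # selected frames that matter
    precision = tp / len(sel)
    recall = tp / len(ann)
    f1 = 2 * precision * recall / (precision + recall)
    compression = 1 - len(sel) / n_frames    # e.g. 0.78 = 78% shorter
    return precision, recall, f1, compression

# Hypothetical 100-frame clip: 4 frames kept, 5 frames annotated.
p, r, f1, c = summary_scores(selected=[3, 10, 42, 77],
                             annotated=[10, 42, 55, 60, 77],
                             n_frames=100)
print(p, c)   # 0.75 0.96
```

High compression with acceptable F1 is the trade-off the entry's 78% / 35.87% figures describe.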

6.
Biomed Signal Process Control ; 73: 103436, 2022 Mar.
Article in English | MEDLINE | ID: covidwho-1562181

ABSTRACT

Background and Objectives: The COVID-19 pandemic manifested the need for robust digital platforms that facilitate healthcare services such as consultancy, clinical therapies, real-time remote monitoring, early diagnosis, and future predictions. Innovations made using technologies such as the Internet of Things (IoT), edge computing, cloud computing, and artificial intelligence are helping to address this crisis. The urge for remote monitoring, symptom analysis, and early detection of diseases has led to a tremendous increase in the deployment of wearable sensor devices. They facilitate seamless gathering of physiological data such as electrocardiogram (ECG) signals, respiration traces (RESP), galvanic skin response (GSR), pulse rate, body temperature, photoplethysmograms (PPG), oxygen saturation (SpO2), etc. For diagnosis and analysis purposes, the gathered data need to be stored. Wearable devices operate on batteries and have a memory constraint, so in mHealth application architectures this gathered data is stored on cloud-based servers. While transmitting data from wearable devices to cloud servers via edge devices, a lot of energy is consumed. This paper proposes a deep learning-based compression model, SCAElite, that reduces the data volume, enabling energy-efficient transmission. Results: The Stress Recognition in Automobile Drivers dataset and the MIT-BIH dataset from PhysioNet are used to validate algorithm performance. The model achieves a compression ratio of up to 300-fold with reconstruction errors within 8% over the stress recognition dataset, and 106.34-fold with reconstruction errors within 8% over the MIT-BIH dataset. The computational complexity of SCAElite is 51.65% less than that of the state-of-the-art deep compression model. Conclusion: It is experimentally validated that SCAElite guarantees a high compression ratio with good-quality restoration capabilities for physiological signal compression in mHealth applications. It has a compact architecture and is computationally more efficient than the state-of-the-art deep compression model.
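The "300-fold compression with reconstruction error within 8%" claim pairs two quantities: a size ratio and a signal-fidelity measure. The entry does not state which error definition SCAElite uses; the sketch below assumes the percent root-mean-square difference (PRD), a measure commonly used in ECG compression, and the sizes and toy signal are illustrative only.

```python
import numpy as np

def compression_stats(signal, reconstructed, compressed_bytes, original_bytes):
    """Compression fold plus percent root-mean-square difference (PRD),
    an error measure commonly used for ECG compression (assumed here)."""
    ratio = original_bytes / compressed_bytes
    prd = 100 * np.sqrt(np.sum((signal - reconstructed) ** 2)
                        / np.sum(signal ** 2))
    return ratio, prd

# Toy 500-sample sine "ECG" and a slightly offset stand-in for the
# decoder's reconstruction.
t = np.linspace(0, 1, 500)
sig = np.sin(2 * np.pi * 5 * t)
recon = sig + 0.01
ratio, prd = compression_stats(sig, recon, compressed_bytes=16,
                               original_bytes=4800)
print(ratio)   # 300.0
```

Under this reading, the reported figures mean the compressed representation is 300x smaller while PRD stays below 8%.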

7.
Pattern Recognit ; 123: 108403, 2022 Mar.
Article in English | MEDLINE | ID: covidwho-1482848

ABSTRACT

This study proposes a contrastive convolutional auto-encoder (contrastive CAE), a combined architecture of an auto-encoder and contrastive loss, to identify individuals with suspected COVID-19 infection using heart-rate data from participants with multiple sclerosis (MS) in the ongoing RADAR-CNS mHealth research project. Heart-rate data were remotely collected using a Fitbit wristband. COVID-19 infection was either confirmed through a positive swab test or inferred through a self-reported set of recognised symptoms of the virus. The contrastive CAE outperforms a conventional convolutional neural network (CNN), a long short-term memory (LSTM) model, and a convolutional auto-encoder without contrastive loss (CAE). On a test set of 19 participants with MS with reported symptoms of COVID-19, each paired with a participant with MS with no COVID-19 symptoms, the contrastive CAE achieves an unweighted average recall of 95.3%, a sensitivity of 100%, a specificity of 90.6%, and an area under the receiver operating characteristic curve (AUC-ROC) of 0.944, indicating maximal detection of symptoms in the given heart-rate measurement period whilst keeping a low false-alarm rate.

8.
J Med Biol Eng ; 41(5): 678-689, 2021.
Article in English | MEDLINE | ID: covidwho-1392062

ABSTRACT

Purpose: In early 2020, the world was amid a significant pandemic due to the novel coronavirus disease outbreak, commonly called COVID-19. Coronavirus disease is a lung infection caused by the Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2) virus. Because of its high transmission rate, it is crucial to detect cases as soon as possible to effectively control the spread of the pandemic and treat patients in the early stages. RT-PCR-based kits are the current standard for COVID-19 diagnosis, but these tests take much time despite their high precision. A faster automated diagnostic tool is required for effective screening of COVID-19. Methods: In this study, a new semi-supervised feature learning technique is proposed to screen COVID-19 patients using chest CT scans. The proposed model uses a three-step architecture consisting of a convolutional autoencoder-based unsupervised feature extractor, a multi-objective genetic algorithm (MOGA)-based feature selector, and a bagging ensemble of support vector machines as a binary classifier. The architecture has been designed to provide precise and robust diagnostics for binary classification (COVID vs. non-COVID). A dataset of 1252 COVID-19 CT scan images, collected from 60 patients, has been used to train and evaluate the model. Results: The best-performing classifier achieved an accuracy of 98.79%, a precision of 98.47%, an area under the curve of 0.998, and an F1 score of 98.85% on 497 test images, within 127 ms per image. The proposed model outperforms the current state-of-the-art COVID-19 diagnostic techniques in terms of speed and accuracy. Conclusion: The experimental results prove the superiority of the proposed methodology in comparison to existing methods. The study also comprehensively compares various feature selection techniques and highlights the importance of feature selection in medical image data problems.

9.
J Clin Med ; 10(14)2021 Jul 14.
Article in English | MEDLINE | ID: covidwho-1314673

ABSTRACT

The COVID-19 pandemic continues to spread globally at a rapid pace, and rapid detection remains a challenge due to the virus's high infectivity and limited testing availability. One of the most readily available imaging modalities in clinical routine is chest X-ray (CXR), which is often used for diagnostic purposes. Here, we propose a computer-aided detection of COVID-19 in CXR imaging using deep and conventional radiomic features. First, we used a 2D U-Net model to segment the lung lobes. Then, we extracted deep latent-space radiomics by applying a deep convolutional autoencoder (ConvAE) with internal dense layers to extract low-dimensional deep radiomics. We used the Johnson-Lindenstrauss (JL) lemma, Laplacian scoring (LS), and principal component analysis (PCA) to reduce dimensionality in conventional radiomics. The generated low-dimensional deep and conventional radiomics were integrated to classify COVID-19 from pneumonia and healthy patients. We used 704 CXR images for training the entire model (i.e., U-Net, ConvAE, and feature selection in conventional radiomics). Afterward, we independently validated the whole system using a study cohort of 1597 cases. We trained and tested a random forest model for detecting COVID-19 cases through multivariate binary-class and multiclass classification. The maximal (full multivariate) model using a combination of the two radiomic groups yields a cross-validated classification accuracy of 72.6% (69.4-74.4%) for multiclass and 89.6% (88.4-90.7%) for binary-class classification.
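The entry above names the Johnson-Lindenstrauss lemma as one of its dimensionality-reduction tools for conventional radiomics. In practice, a JL-style reduction is a random Gaussian projection, as in the minimal sketch below; the feature dimension, target dimension, and sample count are illustrative, not the study's.

```python
import numpy as np

# JL-style reduction: project 100-dim radiomic vectors to 16 dims with a
# random Gaussian matrix, scaled so pairwise distances are preserved in
# expectation.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 100))              # 50 feature vectors (toy data)
k = 16                                      # target dimensionality
R = rng.normal(size=(100, k)) / np.sqrt(k)  # random projection matrix
X_low = X @ R
print(X_low.shape)   # (50, 16)
```

The lemma guarantees that for a large enough k, all pairwise Euclidean distances in X_low stay within a small multiplicative factor of those in X with high probability.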

10.
Comput Commun ; 176: 234-248, 2021 Aug 01.
Article in English | MEDLINE | ID: covidwho-1272369

ABSTRACT

The novel 2019 coronavirus disease (COVID-19) had infected over 141 million people worldwide as of April 20, 2021, and more than 200 countries around the world have been affected by the coronavirus pandemic. For screening for COVID-19, computed tomography (CT) scans provide fast and inexpensive images. In this paper, ResNet-50, VGG-16, a convolutional neural network (CNN), a convolutional auto-encoder neural network (CAENN), and machine learning (ML) methods are proposed for classifying chest CT images of COVID-19. The dataset consists of 1252 CT scans that are positive and 1230 CT scans that are negative for the COVID-19 virus. An advantage of the proposed models over others is that they require neither pre-trained networks nor data augmentation. The classification accuracies of ResNet-50, VGG-16, CNN, and CAENN were 92.24%, 94.07%, 93.84%, and 93.04%, respectively. Among the ML classifiers, the nearest neighbor (NN) classifier had the highest performance, with an accuracy of 94%.
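The entry's best ML classifier is nearest neighbor, which is also the simplest to state: label a query with the class of its closest training vector. The sketch below shows a 1-NN prediction on toy 2-D vectors standing in for CT-image features; the data and function name are illustrative, not the paper's.

```python
import numpy as np

def nn_predict(train_X, train_y, x):
    """1-nearest-neighbor: return the label of the training vector
    closest to x under Euclidean distance."""
    d = np.linalg.norm(train_X - x, axis=1)   # distance to every sample
    return train_y[int(np.argmin(d))]

# Toy feature vectors: two negative (0) and two positive (1) examples.
train_X = np.array([[0.0, 0.0], [0.1, 0.2], [5.0, 5.0], [4.8, 5.1]])
train_y = np.array([0, 0, 1, 1])
print(nn_predict(train_X, train_y, np.array([4.9, 4.9])))   # 1
```

Real CT features are high-dimensional, but the decision rule is unchanged; only the distance computation scales.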
